
    An analysis of the correlation between profitability and liquidity of beverage companies in Chinese market

    The relationship between profitability and liquidity remains controversial, as researchers have reached different conclusions in their studies. In addition, the Chinese market is attracting increasing attention, and profitability and liquidity are crucial for Chinese companies to sustain their operations. Consequently, this thesis analyses the relationship between liquidity, company size, corporation growth and profitability of beverage companies in the Chinese market between 2015 and 2017. Desk research was used to collect the financial data published by the companies; the relationship was studied with the help of ratio analysis, horizontal analysis, descriptive analysis, the Pearson correlation coefficient and a regression analysis. Surprisingly, only a weak or insignificant positive correlation was observed between liquidity, company size, corporation growth and profitability in the short run, contradicting the main literature. The conclusions are limited to the periods examined and the sample companies. Keywords: Profitability, Liquidity, Chinese Beverage Companies, Working Capital, Current Ratio, Company Size, Corporation Growth. DOI: 10.7176/RJFA/11-12-17. Publication date: June 30th 202
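
    The analysis pipeline described above (ratio analysis, Pearson correlation, regression) can be illustrated with a minimal sketch. All figures below are invented placeholders, not data from the thesis.

```python
# A minimal sketch of the kind of analysis described: Pearson correlations and a
# simple OLS regression of profitability (ROA) on liquidity (current ratio),
# company size (log assets) and growth. All numbers are made-up placeholders.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical firm-year observations
current_ratio = np.array([1.8, 2.1, 1.2, 0.9, 1.5, 2.4, 1.1, 1.7])
log_assets    = np.array([8.2, 9.1, 7.5, 7.9, 8.8, 9.4, 7.2, 8.0])
sales_growth  = np.array([0.05, 0.12, -0.02, 0.03, 0.08, 0.15, 0.01, 0.06])
roa           = np.array([0.06, 0.09, 0.03, 0.02, 0.05, 0.11, 0.02, 0.07])

# Pairwise Pearson correlations with profitability
for name, x in [("current ratio", current_ratio),
                ("company size", log_assets),
                ("growth", sales_growth)]:
    r, p = pearsonr(x, roa)
    print(f"{name:13s}: r = {r:+.3f}, p = {p:.3f}")

# Multiple regression: ROA ~ const + liquidity + size + growth (ordinary least squares)
X = np.column_stack([np.ones_like(roa), current_ratio, log_assets, sales_growth])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
print("OLS coefficients [const, liquidity, size, growth]:", np.round(beta, 4))
```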

    Partial Dependency of Vowel Reduction on Stress Shift: Evidence from English -ion Nominalization

    While classic theories use the comparison between cómp[ɛ]nsate → comp[ə]nsátion and cond[ɛ́]nse → cond[ɛ]nsátion to argue that stressed vowels are immune to reduction in multiple affixations (e.g., SPE), this paper presents a corpus-based case study of the quantitative interaction between vowel reduction and stress shift in English -ion nominalization and offers findings that go against the classic claim. After analyzing 1,047 verb-noun target pairs extracted from the CELEX2 dictionary corpus, this study argues that vowel reduction only partially depends on the vowel's stress-bearing feature, and that the suffix type, the stress shift pattern, vowel tenseness, and, crucially, some lexically specific constraints also predict vowel reduction. This finding is further supported by an OT analysis and a statistical model. As a quantitative study that relies on an exhaustive list of English samples to derive its theoretical analysis, this research not only provides new insights into this long-standing debate but also aims to highlight the significance of incorporating large data samples for a complete understanding of phonological phenomena.
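
    As a hypothetical illustration of how several factors can jointly predict reduction (the paper's actual statistical model is not specified in the abstract), a logistic regression over coded verb-noun pairs might look like the sketch below; all features, codings and values are invented.

```python
# Hypothetical sketch: predict vowel reduction from several binary predictors at once.
# This is not the paper's model; the coding and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [vowel was stressed in the verb, stress shifted under -ion, vowel is tense]
X = np.array([
    [1, 1, 0],   # e.g. a condénse -> condensátion type pair (invented coding)
    [0, 0, 0],   # e.g. a cómpensate -> compensátion type pair
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = the vowel reduced to schwa in the noun

model = LogisticRegression().fit(X, y)
print("coefficients (stressed, shift, tense):", np.round(model.coef_[0], 2))
print("P(reduction) for a stressed, shifted, lax vowel:",
      round(model.predict_proba([[1, 1, 0]])[0, 1], 2))
```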

    Entanglement-guided architectures of machine learning by quantum tensor network

    It is a fundamental but still elusive question whether schemes based on quantum mechanics, in particular on quantum entanglement, can be used for classical information processing and machine learning. Even a partial answer to this question would bring important insights to both machine learning and quantum mechanics. In this work, we implement simple numerical experiments, related to pattern/image classification, in which we represent the classifiers by many-qubit quantum states written as matrix product states (MPS). A classical machine learning algorithm is applied to these quantum states to learn the classical data. We explicitly show how quantum entanglement (i.e., single-site and bipartite entanglement) can emerge in images represented this way. Entanglement here characterizes the importance of the data, and this information is used in practice to guide the architecture of the MPS and improve the efficiency. The number of needed qubits can be reduced to less than 1/10 of the original number, which is within the reach of state-of-the-art quantum computers. We expect such numerical experiments could open new paths in characterizing classical machine learning algorithms, and at the same time shed light on generic quantum simulations/computations of machine learning tasks. Comment: 10 pages, 5 figures
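
    The idea that single-site entanglement can rank the importance of qubits/features can be shown with a toy example. The sketch below works on a full state vector rather than an actual MPS (feasible only for a few qubits) and uses a random state, so it illustrates the concept rather than the paper's pipeline.

```python
# Toy sketch: rank qubits by single-site entanglement entropy of a many-qubit state.
# Uses a full state vector and a random state, not the paper's MPS-based method.
import numpy as np

def single_site_entropy(state, site, n_qubits):
    """Von Neumann entropy of the reduced density matrix of one qubit."""
    psi = state.reshape([2] * n_qubits)
    # Move the chosen qubit's axis to the front and flatten the rest
    psi = np.moveaxis(psi, site, 0).reshape(2, -1)
    rho = psi @ psi.conj().T                 # 2x2 reduced density matrix
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

n = 6
rng = np.random.default_rng(0)
state = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
state /= np.linalg.norm(state)

entropies = [single_site_entropy(state, k, n) for k in range(n)]
ranking = np.argsort(entropies)[::-1]        # most entangled sites first
print("single-site entropies:", np.round(entropies, 3))
print("qubits ranked by entanglement:", ranking)
```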

    De re interpretation in belief reports: An experimental investigation

    Determiner phrases (DPs) under intensional operators give rise to multiple interpretations, known as the de re/de dicto ambiguity. Formal theoretical approaches to modeling this ambiguity must rely on nuanced semantic judgments, but inconsistent judgments in the literature suggest that informal judgment collection may be insufficient. In addition, little is known about how these ambiguities are resolved in context and how preferences between the readings vary by context and across individuals. We report three controlled experiments that systematize the collection of truth-value judgments for de re/de dicto readings. While the de dicto readings were robustly accepted by nearly all English speakers, de re readings exhibited strongly bimodal judgments, suggesting an inherent disagreement among speakers. In addition, the acceptability of de re readings was affected by the DP's internal structure as well as by idiosyncratic scenarios. More broadly, our experimental results lend support to the practice of including quantitative data collection within semantics.
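
    One simple way to quantify the kind of bimodality reported here (not necessarily the analysis the authors used) is Sarle's bimodality coefficient over per-participant acceptance rates; the rates in the sketch are invented.

```python
# Hedged sketch: flag strongly bimodal judgment data with Sarle's bimodality
# coefficient. The per-participant acceptance rates below are invented placeholders.
import numpy as np
from scipy.stats import skew, kurtosis

# Hypothetical per-participant acceptance rates for de re readings (0..1)
de_re = np.array([0.1, 0.0, 0.2, 0.9, 1.0, 0.8, 0.1, 0.9, 0.95, 0.05, 0.85, 0.15])

def bimodality_coefficient(x):
    n = len(x)
    g1 = skew(x)                      # sample skewness
    g2 = kurtosis(x)                  # excess kurtosis
    return (g1**2 + 1) / (g2 + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

bc = bimodality_coefficient(de_re)
print(f"bimodality coefficient = {bc:.2f} (values > ~0.555 suggest bimodality)")
```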

    Does the Outcome of the US-China Trade War Meet the Purpose?

    As the two largest economies in the world, China and the US have seen their economic relations expand substantially over the past decades. China is the US's second-largest merchandise trading partner, third-largest export market, and most significant source of imports (Li, He & Lin, 2018). During his presidency, Trump advocated tariffs to reduce the US deficit and promote domestic manufacturing. Our interest is in whether the Trade War reduced the US's trade deficit with China and whether it was necessary. People might ask, hasn't this topic been studied before? The answer is yes, and no. Our group proposes a fresh approach: the treatment-effect analysis put forward by Hsiao, Ching, and Wan. This method is our preferred option since it is considered one of the simplest and most accurate methods in the treatment-effects estimation literature. We run the related regression and predict the counterfactual trade deficit between the US and China as if the Trade War had not happened. Then we calculate the average difference between the actual values and the predicted ones. Finally, we conclude that, rather than reducing it, the Trade War increased the US's trade deficit with China by 1.5 million. We hope that this new method, applied to the latest data, will offer new insights into the Trade War for people interested in this area and inspire future research in this field.
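
    The counterfactual idea can be sketched as follows: fit the treated series on control series over the pre-treatment window, predict the post-treatment counterfactual, and average the gap. This is a simplified stand-in for the Hsiao, Ching and Wan estimator, and all series below are synthetic rather than the paper's data.

```python
# Simplified sketch of a panel-data counterfactual: regress the treated series on
# control series pre-treatment, predict post-treatment, and average the difference.
# All series are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
T_pre, T_post = 40, 12                      # pre/post-treatment months (hypothetical)
controls = rng.normal(size=(T_pre + T_post, 3)).cumsum(axis=0)   # control-country series
deficit = 2.0 + controls @ np.array([0.5, -0.3, 0.2]) \
          + rng.normal(scale=0.1, size=T_pre + T_post)
deficit[T_pre:] += 1.5                      # pretend the "trade war" shifted the series

# Fit on pre-treatment data only
X_pre = np.column_stack([np.ones(T_pre), controls[:T_pre]])
beta, *_ = np.linalg.lstsq(X_pre, deficit[:T_pre], rcond=None)

# Counterfactual for the post-treatment period and average treatment effect
X_post = np.column_stack([np.ones(T_post), controls[T_pre:]])
counterfactual = X_post @ beta
att = np.mean(deficit[T_pre:] - counterfactual)
print(f"estimated average effect on the deficit: {att:+.2f} (true shift injected: +1.50)")
```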

    Representing Affect Information in Word Embeddings

    A growing body of research in natural language processing (NLP) and natural language understanding (NLU) investigates human-like knowledge learned or encoded in the word embeddings of large language models. This is a step towards understanding what knowledge language models capture that resembles human understanding of language and communication. Here, we investigated whether and how the affect meaning of a word (i.e., valence, arousal, dominance) is encoded in word embeddings pre-trained in large neural networks. We used a human-labeled dataset as the ground truth and performed various correlational and classification tests on four types of word embeddings. The embeddings varied in being static or contextualized, and in how much affect-specific information was prioritized during the pre-training and fine-tuning phases. Our analyses show that word embeddings from the vanilla BERT model did not saliently encode the affect information of English words. Only when the BERT model was fine-tuned on emotion-related tasks, or contained extra contextualized information from emotion-rich contexts, could the corresponding embeddings encode more relevant affect information.
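
    A hedged sketch of the kind of correlational test described: predict a word's valence rating from its embedding and check how well the predictions correlate with the human labels. The embeddings and ratings below are random placeholders, not BERT vectors or the actual human-labeled dataset.

```python
# Sketch: cross-validated ridge regression from word embeddings to valence ratings,
# then Pearson correlation between predicted and "human" ratings. Data is synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_words, dim = 500, 64
embeddings = rng.normal(size=(n_words, dim))               # placeholder word vectors
valence = embeddings[:, 0] * 0.8 + rng.normal(scale=0.5, size=n_words)  # toy ratings

# Cross-validated ridge regression: embedding -> valence
pred = cross_val_predict(Ridge(alpha=1.0), embeddings, valence, cv=5)
r, p = pearsonr(pred, valence)
print(f"Pearson r between predicted and human valence: {r:.2f} (p = {p:.1e})")
```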

    Query2GMM: Learning Representation with Gaussian Mixture Model for Reasoning over Knowledge Graphs

    Logical query answering over Knowledge Graphs (KGs) is a fundamental yet complex task. A promising approach to achieve this is to embed queries and entities jointly into the same embedding space. Research along this line suggests that using a multi-modal distribution to represent answer entities is more suitable than a uni-modal distribution, as a single query may contain multiple disjoint answer subsets due to the compositional nature of multi-hop queries and the varying latent semantics of relations. However, existing methods based on multi-modal distributions roughly represent each subset without capturing its accurate cardinality, or even degenerate into uni-modal distribution learning during the reasoning process due to the lack of an effective similarity measure. To better model queries with diversified answers, we propose Query2GMM for answering logical queries over knowledge graphs. In Query2GMM, we present the GMM embedding to represent each query using a univariate Gaussian Mixture Model (GMM). Each subset of a query is encoded by its cardinality, semantic center and dispersion degree, allowing for precise representation of multiple subsets. Then we design specific neural networks for each operator to handle the inherent complexity that comes with multi-modal distributions while alleviating cascading errors. Last, we define a new similarity measure to assess the relationships between an entity and a query's multi-answer subsets, enabling effective multi-modal distribution learning for reasoning. Comprehensive experimental results show that Query2GMM outperforms the best competitor by an absolute average of 5.5%. The source code is available at https://anonymous.4open.science/r/Query2GMM-C42F.
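
    A loose illustration of the representation (not Query2GMM's actual model or similarity measure): each answer subset is a mixture component with a weight (cardinality share), a center and a dispersion, and an entity can be scored by the mixture's log-density. All numbers are invented, and the example is one-dimensional for readability.

```python
# Illustration only: a query's answer set as a univariate Gaussian mixture, scoring
# an entity by the mixture's log-density. Not Query2GMM's neural model.
import numpy as np

# Query representation: K components = (weight, mean, std) per answer subset
components = [
    (0.7, 0.2, 0.10),   # large answer subset centered near 0.2
    (0.3, 0.8, 0.05),   # smaller, tighter subset near 0.8
]

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def query_entity_score(entity, components):
    """Log-density of the entity embedding under the query's Gaussian mixture."""
    density = sum(w * gaussian_pdf(entity, mu, s) for w, mu, s in components)
    return float(np.log(density + 1e-12))

for e in (0.18, 0.55, 0.82):
    print(f"entity at {e:.2f}: score = {query_entity_score(e, components):+.2f}")
```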

    Adaptive Data Augmentation for Contrastive Learning

    In computer vision, contrastive learning is currently the most advanced unsupervised learning framework. Yet most previous methods simply apply a fixed composition of data augmentations to improve data efficiency, which ignores the changes in their optimal settings over training. Thus, the pre-determined parameters of augmentation operations cannot always fit well with an evolving network during the whole training period, which degrades the quality of the learned representations. In this work, we propose AdDA, which adds a closed-loop feedback structure to a generic contrastive learning network. AdDA works by allowing the network to adaptively adjust the augmentation compositions according to real-time feedback. This online adjustment helps maintain the dynamic optimal composition and enables the network to acquire more generalizable representations with minimal computational overhead. AdDA achieves competitive results under the common linear protocol on ImageNet-100 classification (+1.11% on MoCo v2). Comment: Accepted by ICASSP 202
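
    The closed-loop idea can be conveyed with a conceptual sketch (not AdDA's actual algorithm): sample augmentation compositions by weight and nudge the weights with a feedback signal from training. The feedback and update rule below are placeholders.

```python
# Conceptual sketch of closed-loop augmentation selection: sample compositions by
# weight and update the weights from a feedback signal. Not AdDA's actual method;
# the feedback here is a random placeholder.
import numpy as np

rng = np.random.default_rng(0)
compositions = ["crop+color", "crop+blur", "color+blur", "crop+gray"]
weights = np.ones(len(compositions)) / len(compositions)
eta = 0.1   # step size for the multiplicative update (hypothetical)

for step in range(5):
    idx = rng.choice(len(compositions), p=weights)      # pick a composition
    feedback = rng.uniform(-1, 1)                       # e.g. change in contrastive loss
    weights[idx] *= np.exp(eta * feedback)              # reward helpful compositions
    weights /= weights.sum()                            # renormalize to a distribution
    print(f"step {step}: used {compositions[idx]:>10s}, weights = {np.round(weights, 2)}")
```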

    Mechanism and application of Lactobacillus in type 2 diabetes-associated periodontitis

    Type 2 diabetes mellitus (T2DM) accelerates the progression of periodontitis through diverse pathways. Abnormal immune responses, excessive activation of inflammation, increased levels of advanced glycation end-products, and oxidative stress have defined roles in the pathophysiological process of T2DM-associated periodontitis. Furthermore, in the periodontium of diabetic individuals, there are high levels of advanced glycation end-products and glucose. Meanwhile, progress in microbiomics has revealed that the dysbacteriosis caused by T2DM also contributes to the progression of periodontitis. Lactobacillus, owing to its fine-tuning function in the local microbiota, has sparked tremendous interest in this field. Accumulating research on Lactobacillus has detailed its beneficial role in both diabetes and oral diseases. In this study, we summarize the newly discovered mechanisms underlying Lactobacillus-mediated improvement of T2DM-associated periodontitis and propose the clinical application of Lactobacillus.